Algorithmic linear dimension reduction in the l_1 norm for sparse vectors
Abstract
We can approximately recover a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. Several research communities are interested in techniques for measuring and recovering such signals, and a variety of approaches have been proposed. We focus on two important properties of such algorithms:

• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational efficiency. The time to recover such an m-sparse signal should be close to the obvious lower bound, m log(d/m).

To date, algorithms for signal recovery that provide a uniform measurement matrix with approximately the optimal number of measurements, such as those first proposed by Donoho and his collaborators and, separately, by Candès and Tao, are based on linear programming and require time poly(d) instead of m polylog(d). On the other hand, the fast decoding algorithms to date from the Theoretical Computer Science and Database communities fail with probability at least 1/poly(d), whereas we need failure probability no more than around 1/dᵐ to achieve a uniform failure guarantee.

This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log²(m) log²(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in ℓ1. In particular, the algorithm recovers m-sparse signals perfectly, and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log²(d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm.

These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the ℓ1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log² d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log²(m) log²(d)). Furthermore, this reconstruction is stable and robust under small perturbations.
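To make the measure-then-decode pipeline concrete, here is a minimal numpy sketch. It is not the paper's sublinear-time decoder: the measurement matrix is dense Gaussian rather than the paper's construction, the decoder is plain orthogonal matching pursuit, and the constant 4 in the number of measurements is an arbitrary illustrative choice.

```python
# Toy illustration of nonadaptive sparse recovery (NOT the paper's algorithm):
# one fixed random matrix measures an m-sparse signal, and a greedy decoder
# recovers it from ~ m log d linear measurements.
import numpy as np

rng = np.random.default_rng(0)
d, m = 1024, 5                      # signal length and sparsity
n = int(4 * m * np.log(d))          # ~ m log d nonadaptive measurements

x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)  # m-sparse signal

Phi = rng.standard_normal((n, d)) / np.sqrt(n)  # one matrix for all signals
y = Phi @ x                                      # the sketch / measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the coefficients on the chosen support.
support, residual = [], y.copy()
for _ in range(m):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    cols = Phi[:, support]
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    residual = y - cols @ coef

x_hat = np.zeros(d)
x_hat[support] = coef
print("relative l1 error:", np.abs(x_hat - x).sum() / np.abs(x).sum())
```

On exactly m-sparse inputs like this one, the greedy decoder typically recovers the signal to machine precision, mirroring the abstract's claim that m-sparse signals are recovered perfectly.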
Similar resources
A new method to determine a well-dispersed subsets of non-dominated vectors for MOMILP problem
Multi-objective optimization is the simultaneous consideration of two or more objective functions that are completely or partially in conflict with each other. The optimality of such optimizations is largely defined through Pareto optimality. Multiple objective integer linear programs (MOILP) are special cases of multiple criteria decision making problems. Numerous algorithms have been desig...
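As a concrete illustration of the Pareto-optimality notion this abstract relies on, here is a minimal non-dominated filter in Python (numpy assumed). It is a generic dominance check, not the cited paper's method for producing well-dispersed subsets.

```python
# Keep the rows of `points` not dominated by any other row
# (minimization in every objective).
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)   # q at least as good everywhere,
            for j, q in enumerate(points)       # strictly better somewhere
            if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

pts = np.array([[1, 4], [2, 2], [4, 1], [3, 3]])
print(pareto_front(pts))   # [3, 3] is dominated by [2, 2] and is dropped
```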
متن کاملSparse recovery using the preservability of the null space property under random measurements
The field of compressed sensing has become a major tool in high-dimensional analysis, with the realization that vectors can be recovered from relatively few linear measurements as long as the vectors lie in a low-dimensional structure, typically the vectors that are zero in most coordinates with respect to a basis. However, there are many applications where we instead want to recover vecto...
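To ground the recovery setting this abstract refers to, here is a minimal basis-pursuit sketch (scipy assumed): it recovers a sparse vector from few random measurements by ℓ1 minimization, the regime in which null-space-property arguments apply. It is a generic illustration, not the cited paper's construction; the problem sizes are arbitrary.

```python
# Basis pursuit: min ||x||_1 subject to A x = y, solved as a linear program
# via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, m, n = 60, 3, 25                              # ambient dim, sparsity, measurements
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
A = rng.standard_normal((n, d))
y = A @ x

c = np.ones(2 * d)                               # sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                        # A (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * d))
x_hat = res.x[:d] - res.x[d:]
print("max recovery error:", np.max(np.abs(x_hat - x)))
```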
متن کاملRegularized singular value decomposition: a sparse dimension reduction technique
Singular value decomposition (SVD) is a useful multivariate technique for dimension reduction. It has been successfully applied to analyze microarray data, where the eigenvectors are called eigen-genes/arrays. One weakness associated with the SVD is the interpretation. The eigen-genes are essentially linear combinations of all the genes. It is desirable to have sparse SVD, which retains the di...
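As a rough illustration of what a sparse SVD buys, the following numpy sketch soft-thresholds the loadings of the leading singular vector so that the "eigen-gene" involves only a few genes. The threshold and data here are arbitrary assumptions, and this is not the cited paper's regularization procedure.

```python
# Sparsify the leading right singular vector by soft-thresholding its loadings.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 200))          # e.g. 50 arrays x 200 genes

U, s, Vt = np.linalg.svd(X, full_matrices=False)
v = Vt[0]                                   # leading "eigen-gene" loadings

v_sparse = soft_threshold(v, t=0.05)        # zero out small loadings
v_sparse /= np.linalg.norm(v_sparse)        # renormalize to unit length
print("nonzero loadings:", np.count_nonzero(v_sparse), "of", v.size)
```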
متن کاملProjection Inequalities and Their Linear Preservers
This paper introduces an inequality on vectors in $\mathbb{R}^n$ which compares vectors in $\mathbb{R}^n$ based on the $p$-norm of their projections on $\mathbb{R}^k$ ($k\leq n$). For $p>0$, we say $x$ is $d$-projectionally less than or equal to $y$ with respect to the $p$-norm if $\sum_{i=1}^k \vert x_i\vert^p$ is less than or equal to $\sum_{i=1}^k \vert y_i\vert^p$, for every $d\leq k\leq n$. For...
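The definition translates directly into code. The following Python check (numpy assumed; function and parameter names are illustrative) tests the $d$-projectional ordering by comparing partial sums of $\vert\cdot\vert^p$.

```python
# True iff sum_{i<=k} |x_i|^p <= sum_{i<=k} |y_i|^p for every k in [d, n].
import numpy as np

def proj_leq(x, y, p, d):
    xs = np.cumsum(np.abs(x) ** p)          # partial sums for x
    ys = np.cumsum(np.abs(y) ** p)          # partial sums for y
    return bool(np.all(xs[d - 1:] <= ys[d - 1:]))

x = np.array([1.0, -1.0, 0.5])
y = np.array([1.0,  2.0, 0.5])
print(proj_leq(x, y, p=2, d=2))   # True: x's partial sums never exceed y's
```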
Journal: CoRR
Volume: abs/cs/0608079
Year: 2006